    Fusion of 3D LIDAR and Camera Data for Object Detection in Autonomous Vehicle Applications

    It is critical for an autonomous vehicle to acquire accurate, real-time information about the objects in its vicinity, which is essential for guaranteeing the safety of the passengers and the vehicle in various environments. 3D LIDAR can directly obtain the position and geometric structure of objects within its detection range, while a vision camera is well suited for object recognition. Accordingly, this paper presents a novel object detection and identification method that fuses the complementary information of the two kinds of sensors. We first utilize the 3D LIDAR data to generate accurate object-region proposals efficiently. These candidates are then mapped into the image space, where the regions of interest (ROIs) of the proposals are selected and input to a convolutional neural network (CNN) for further object recognition. To identify objects of all sizes precisely, we combine the features of the last three layers of the CNN to extract multi-scale features of the ROIs. The evaluation results on the KITTI dataset demonstrate that: (1) unlike sliding windows, which produce thousands of candidate object-region proposals, 3D LIDAR provides an average of 86 real candidates per frame with a minimum recall rate higher than 95%, which greatly reduces the proposal extraction time; (2) the average processing time per frame of the proposed method is only 66.79 ms, which meets the real-time demand of autonomous vehicles; and (3) the average identification accuracies of our method for cars and pedestrians on the moderate level are 89.04% and 78.18%, respectively, which outperform most previous methods.
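
    As a rough illustration of the proposal-mapping step described above, the sketch below projects the eight corners of a LIDAR-derived 3D box into the image plane and takes their bounding rectangle as the 2D ROI. The projection matrix P (LIDAR frame to image pixels) and the function name are assumptions for illustration, not the authors' code; in the KITTI setup such a matrix comes from the sensor calibration files.

```python
import numpy as np

def box3d_to_roi(corners_3d: np.ndarray, P: np.ndarray):
    """corners_3d: (8, 3) box corners expressed in the frame that P expects;
    P: (3, 4) projection matrix. Returns (x_min, y_min, x_max, y_max) in pixels."""
    pts_h = np.hstack([corners_3d, np.ones((corners_3d.shape[0], 1))])  # homogeneous coords
    proj = (P @ pts_h.T).T                   # (8, 3) projective image coordinates
    proj = proj[proj[:, 2] > 0]              # keep corners in front of the camera
    uv = proj[:, :2] / proj[:, 2:3]          # perspective division
    x_min, y_min = uv.min(axis=0)
    x_max, y_max = uv.max(axis=0)
    return float(x_min), float(y_min), float(x_max), float(y_max)
```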

    Kinematic and Dynamic Vehicle Model-Assisted Global Positioning Method for Autonomous Vehicles with Low-Cost GPS/Camera/In-Vehicle Sensors

    Real-time, precise and low-cost vehicular positioning systems referenced to global continuous coordinates are needed for path planning and motion control in autonomous vehicles. However, existing positioning systems do not perform well in urban canyons, tunnels and indoor parking lots. To address this issue, this paper proposes a multi-sensor positioning system that combines a global positioning system (GPS), a camera and in-vehicle sensors, assisted by kinematic and dynamic vehicle models. First, the system eliminates image blurring and removes false feature correspondences to ensure the local accuracy and stability of the visual simultaneous localisation and mapping (SLAM) algorithm. Next, the global GPS coordinates are transferred to a local coordinate system that is consistent with the visual SLAM process, and the GPS and visual SLAM tracks are calibrated with improved weighted iterative closest point and least absolute deviation methods. Finally, an inverse coordinate conversion is conducted to obtain the position in the global coordinate system. To improve the positioning accuracy, information from the in-vehicle sensors is fused using an interacting multiple-model extended Kalman filter based on kinematic and dynamic vehicle models. The developed algorithm was verified via intensive simulations and evaluated through experiments using the KITTI benchmarks (a project of the Karlsruhe Institute of Technology and the Toyota Technological Institute at Chicago) and data captured with our autonomous vehicle platform. The results show that the proposed positioning system improves the accuracy and reliability of positioning in environments in which the Global Navigation Satellite System is not available. The developed system is suitable for the positioning and navigation of autonomous vehicles.
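
    The track-calibration step above can be pictured with the simplified sketch below, which aligns matched 2D points of the GPS track and the visual SLAM track using a plain least-squares rigid alignment (Kabsch/Umeyama style). This is a deliberately simplified stand-in for the paper's improved weighted ICP with least absolute deviations; the function name and the 2D formulation are assumptions made for illustration.

```python
import numpy as np

def align_tracks(gps_xy: np.ndarray, slam_xy: np.ndarray):
    """gps_xy, slam_xy: (N, 2) matched track points.
    Returns R (2x2) and t (2,) such that R @ slam_xy[i] + t approximates gps_xy[i]."""
    mu_g, mu_s = gps_xy.mean(axis=0), slam_xy.mean(axis=0)
    H = (slam_xy - mu_s).T @ (gps_xy - mu_g)                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_s
    return R, t
```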

    Modern Machine Learning Techniques for Univariate Tunnel Settlement Forecasting: A Comparative Study

    Tunnel settlement commonly occurs during tunnel construction in large cities. Existing forecasting methods for tunnel settlement include model-based approaches and artificial intelligence (AI) enhanced approaches. Compared with traditional forecasting methods, artificial neural networks can be easily implemented and offer high efficiency and forecasting accuracy. In this study, an extended machine learning framework is proposed that combines particle swarm optimization (PSO) with support vector regression (SVR), a back-propagation neural network (BPNN), and an extreme learning machine (ELM) to forecast the surface settlement for tunnel construction in two large cities in P.R. China. Verification on real-world data shows that the PSO-SVR method achieves the highest forecasting accuracy among the three proposed forecasting algorithms.
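
    A minimal sketch of the PSO-SVR idea is shown below, assuming a univariate settlement series framed with a sliding window. The swarm size, inertia and acceleration constants, and search ranges are illustrative choices, not the study's actual settings, and only C and gamma are tuned here.

```python
import numpy as np
from sklearn.svm import SVR

def make_windows(series, lag=5):
    """Turn a univariate series into lagged feature windows and targets."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = np.array(series[lag:])
    return X, y

def pso_svr(X_train, y_train, X_val, y_val, n_particles=10, n_iter=20):
    """Search (log10 C, log10 gamma) for an RBF SVR with a basic particle swarm."""
    rng = np.random.default_rng(0)
    pos = rng.uniform([-1, -3], [3, 1], size=(n_particles, 2))   # particle positions
    vel = np.zeros_like(pos)

    def cost(p):
        model = SVR(C=10 ** p[0], gamma=10 ** p[1]).fit(X_train, y_train)
        return np.mean((model.predict(X_val) - y_val) ** 2)      # validation MSE

    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], costs[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return SVR(C=10 ** gbest[0], gamma=10 ** gbest[1]).fit(X_train, y_train)
```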

    A Novel On-Ramp Merging Strategy for Connected and Automated Vehicles Based on Game Theory

    Connected and automated vehicles (CAVs) have attracted much attention from researchers because of their potential to improve both transportation network efficiency and safety through control algorithms, and to reduce fuel consumption. However, vehicle merging at intersections is one of the main factors that lead to congestion and extra fuel consumption. In this paper, we focus on the on-ramp merging scenario for CAVs, propose a centralized approach based on game theory to control the merging process for all agents without any collisions, and optimize the overall fuel consumption and total travel time. The game framework has three basic components: benefit, loss, and rules. In our model, the benefit is the priority of passing the merging point, represented by the merging sequence (MS); the loss is the cost of fuel consumption and total travel time; and the game rules are designed in accordance with traffic density, fairness, and wholeness. Each rule has a different degree of importance; to obtain the optimal weight of each rule, we formulate the problem as a bi-objective optimization problem and obtain the weights by searching the feasible Pareto solutions. To assign the merging sequence, we score each competitor on the three aspects, multiply the scores by the corresponding weights, and give the agent with the higher total score a comparatively smaller MS, i.e., higher priority to pass the merging point. Simulations and comparisons demonstrate the effectiveness of the proposed method. Moreover, the proposed method improves fuel economy and reduces travel time.
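
    The scoring-and-ranking step described above can be illustrated with the short sketch below: each competing vehicle receives a score on the three rule aspects, the scores are weighted and summed, and higher totals are assigned smaller merging-sequence numbers. The numbers and rule weights are made up for illustration; in the paper the weights come from the Pareto search.

```python
import numpy as np

def assign_merging_sequence(rule_scores: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """rule_scores: (n_vehicles, 3) per-rule scores; weights: (3,).
    Returns an array where entry i is the merging sequence (1 = first) of vehicle i."""
    total = rule_scores @ weights                 # weighted score per vehicle
    order = np.argsort(-total)                    # higher score merges earlier
    ms = np.empty(len(total), dtype=int)
    ms[order] = np.arange(1, len(total) + 1)
    return ms

# Example: three vehicles competing at the merging point (hypothetical values).
scores = np.array([[0.8, 0.5, 0.6],
                   [0.4, 0.9, 0.7],
                   [0.6, 0.6, 0.5]])
print(assign_merging_sequence(scores, weights=np.array([0.5, 0.3, 0.2])))
```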

    Image Antiblurring and Statistic Filter of Feature Space Displacement: Application to Visual Odometry for Outdoor Ground Vehicle

    Precise, reliable, and low-cost vehicular localization across a continuous spatiotemporal domain is an important problem in the field of outdoor ground vehicles. This paper proposes a visual odometry algorithm in which an ultra-robust and fast feature-matching scheme is combined with an effective antiblurring frame selection strategy. Our method follows the procedure of finding feature correspondences between consecutive frames and minimizing their reprojection error. Blurred images pose a great challenge for localization during sharp turns or fast movement, so we mitigate the impact of blur with an image singular value decomposition (SVD) antiblurring algorithm. Moreover, a statistic filter of feature space displacement and circle matching are proposed to screen or prune potential matching features so as to remove the outliers caused by mismatching. An evaluation on the KITTI benchmark dataset and on real outdoor data with blur, low texture, and illumination change demonstrates that the proposed ego-motion scheme achieves competitive performance with respect to other state-of-the-art visual odometry approaches.
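
    Two of the building blocks mentioned above can be sketched in a few lines, under simplifying assumptions: an SVD-based blur score for frame selection (blurred frames concentrate their singular-value energy in the first few components) and a robust statistic filter that prunes feature matches whose frame-to-frame displacement lies far from the median. Function names, the top-k energy ratio, and the MAD threshold are illustrative, not the paper's exact formulation.

```python
import numpy as np

def svd_blur_score(gray: np.ndarray, k: int = 10) -> float:
    """Fraction of singular-value energy captured by the top-k singular values
    of a 2-D grayscale image; higher values suggest a more blurred frame."""
    s = np.linalg.svd(gray.astype(np.float64), compute_uv=False)
    return float(s[:k].sum() / s.sum())

def displacement_filter(disp: np.ndarray, n_mad: float = 3.0) -> np.ndarray:
    """disp: (N, 2) pixel displacements of matched features between frames.
    Returns a boolean mask keeping matches within n_mad median absolute
    deviations of the median displacement (a simple robust outlier screen)."""
    med = np.median(disp, axis=0)
    mad = np.median(np.abs(disp - med), axis=0) + 1e-9   # avoid division by zero
    return np.all(np.abs(disp - med) <= n_mad * mad, axis=1)
```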

    A Review of Image Super-Resolution Approaches Based on Deep Learning and Applications in Remote Sensing

    At present, with the advance of satellite image processing technology, remote sensing images are becoming more widely used in real scenes. However, due to the limitations of current remote sensing imaging technology and the influence of the external environment, the resolution of remote sensing images often struggles to meet application requirements. To obtain high-resolution remote sensing images, image super-resolution methods are gradually being applied to the recovery and reconstruction of remote sensing images. Image super-resolution can overcome the current limitations of remote sensing image acquisition systems and acquisition environments, addressing the problems of poor-quality remote sensing images, blurred regions of interest, and the requirement for high-efficiency image reconstruction, which makes it a research topic of significant relevance to image processing. In recent years, tremendous progress has been made in image super-resolution methods, driven by the continuous development of deep learning algorithms. In this paper, we provide a comprehensive overview and analysis of deep-learning-based image super-resolution methods. Specifically, we first introduce the research background and details of image super-resolution techniques. Second, we present important aspects of remote sensing image super-resolution, such as training and testing datasets, image quality and model performance evaluation methods, model design principles, and related applications. Finally, we point out existing problems and future directions in the field of remote sensing image super-resolution.
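
    As a small, generic companion to the evaluation methods mentioned above, the sketch below computes PSNR, one of the standard image-quality metrics used to compare a super-resolved image against its high-resolution reference; the function name and the 8-bit peak value are assumptions for illustration.

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between two images of identical shape."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```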
